Create a cron job that calls all performance harnesses on cron #31480
Conversation
```yaml
on:
  schedule:
    # * is a special character in YAML so you have to quote this string
    - cron: "30 5,17 * * *"
```
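For reference, `30 5,17 * * *` fires at minute 30 of hours 5 and 17, i.e. twice a day at 05:30 and 17:30 UTC (GitHub Actions schedules run in UTC).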
```diff
 jobs:
   postgres-1m-run:
     uses: ./.github/workflows/connector-performance-command.yml
     with:
-      connector: connectors/source-postgres
+      connector: "connectors/source-postgres"
       dataset: 1m
```
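A job-level `uses:` like the one above invokes a reusable workflow, with `with:` supplying its declared inputs. For that to work, the called file has to expose a `workflow_call` trigger. A minimal sketch of what connector-performance-command.yml would need to declare (the exact input names and defaults here are assumptions, not taken from this PR):

```yaml
# Hypothetical sketch of the callee's trigger; the real file may differ.
on:
  workflow_call:
    inputs:
      connector:
        type: string
        required: true
      dataset:
        type: string
        default: "1m"
```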
Are all datasets going to run every time? Is this needed?
Actually, that's something I want to discuss with you. Once we support and test against full_refresh and incremental, and add MongoDB support, we will have up to 18 combinations of connector/dataset/sync mode. I feel we do not need to enumerate all datasets. What was the original purpose of defining 3 different datasets for performance testing?
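If every combination did have to run on the schedule, a job-level `strategy.matrix` on the reusable-workflow call would avoid enumerating 18 jobs by hand. A hypothetical sketch, assuming a MongoDB connector path and a `sync_mode` input that the harness may not actually accept yet:

```yaml
jobs:
  performance-matrix:
    strategy:
      matrix:
        connector:
          - "connectors/source-postgres"
          - "connectors/source-mysql"
          - "connectors/source-mongodb-v2"  # hypothetical path for the MongoDB connector
        dataset: ["1m", "10m", "20m"]
        sync_mode: ["full_refresh", "incremental"]  # 3 x 3 x 2 = 18 combinations
    uses: ./.github/workflows/connector-performance-command.yml
    with:
      connector: ${{ matrix.connector }}
      dataset: ${{ matrix.dataset }}
      sync_mode: ${{ matrix.sync_mode }}  # hypothetical input
```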
I don't think 1m gives us anything very different from 10m or 20m.
I'm OK with leaving the other datasets for manual testing, where you may want to check more specialized cases.
Sounds good. Removed 10m/20m for both connectors.
```diff
-          github.event.inputs.connector != 'connectors/destination-snowflake' }}"
+        if: "${{ inputs.connector != 'connectors/source-postgres' &&
+          inputs.connector != 'connectors/source-mysql' &&
+          inputs.connector != 'connectors/destination-snowflake' }}"
```
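The switch from `github.event.inputs` to `inputs` matters because this file is now a reusable workflow: when it is invoked via `workflow_call`, `github.event` describes the caller's triggering event, so `github.event.inputs` is not populated, while the `inputs` context resolves for both `workflow_dispatch` and `workflow_call`. A minimal sketch, assuming a workflow that supports both triggers (input names are illustrative):

```yaml
on:
  workflow_dispatch:
    inputs:
      connector:
        type: string
  workflow_call:
    inputs:
      connector:
        type: string

jobs:
  run:
    runs-on: ubuntu-latest
    # `inputs.connector` resolves under both triggers;
    # `github.event.inputs.connector` only resolves for workflow_dispatch.
    if: "${{ inputs.connector != 'connectors/destination-snowflake' }}"
    steps:
      - run: echo "Running for ${{ inputs.connector }}"
```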
Make sure that the MongoDB changes from the other PR don't get run over during the merge.
What
#31365
Created a new workflow that calls the connectors-performance-harness workflow with different parameters so we can run all performance tests on a schedule.
Jobs run in parallel and take about 20 minutes to finish: https://github.com/airbytehq/airbyte/actions/runs/6539691823
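For context, the parallelism falls out of GitHub Actions defaults: jobs with no `needs:` dependency between them run concurrently. A sketch of how the caller might pair two such jobs (the `mysql-1m-run` job name is an assumption based on the connectors this PR mentions):

```yaml
jobs:
  postgres-1m-run:
    uses: ./.github/workflows/connector-performance-command.yml
    with:
      connector: "connectors/source-postgres"
      dataset: 1m
  # No `needs:` between the jobs, so this runs in parallel with postgres-1m-run.
  mysql-1m-run:
    uses: ./.github/workflows/connector-performance-command.yml
    with:
      connector: "connectors/source-mysql"
      dataset: 1m
```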
How
Recommended reading order
x.java
y.python
🚨 User Impact 🚨
Are there any breaking changes? What is the end result perceived by the user?
For connector PRs, use this section to explain which type of semantic versioning bump occurs as a result of the changes. Refer to our Semantic Versioning for Connectors guidelines for more information. Breaking changes to connectors must be documented by an Airbyte engineer (PR author, or reviewer for community PRs) by using the Breaking Change Release Playbook.
If there are breaking changes, please merge this PR with the 🚨🚨 emoji so changelog authors can further highlight this if needed.
Pre-merge Actions
Expand the relevant checklist and delete the others.
New Connector
Community member or Airbyter
- Unit & integration tests pass: `./gradlew :airbyte-integrations:connectors:<name>:integrationTest`
- Connector version is set to `0.0.1`
  - `Dockerfile` has version `0.0.1`
- Documentation updated:
  - Connector's `README.md`
  - Connector's `bootstrap.md`. See description and examples
  - `docs/integrations/<source or destination>/<name>.md`, including a changelog with an entry for the initial version. See changelog example
  - `docs/integrations/README.md`
Airbyter
If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.
Updating a connector
Community member or Airbyter
Airbyter
If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.
Connector Generator
- Scaffold templates (connectors with `-scaffold` in their name) have been updated with the latest scaffold by running `./gradlew :airbyte-integrations:connector-templates:generator:generateScaffolds`, then checking in your changes
Updating the Python CDK
Airbyter
Before merging:
- Run connector tests against the local CDK by passing `--use-local-cdk --name=source-<connector>` as options, e.g. `airbyte-ci connectors --use-local-cdk --name=source-<connector> test`
After merging: